WIP/POC: handle sidecar containers #728
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: dicarlo2. If they are not already assigned, you can assign the PR to them. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/ok-to-test woo! I've been looking forward to something along these lines.
Oh, yeah, forgot to mention: not only are there no tests, but I haven't even looked at the ones that already exist (and are likely failing). If we're happy with this approach I can clean up the PR + fix/add tests.
ts.Volumes = append(ts.Volumes, corev1.Volume{
	Name: entrypoint.DownwardMountName,
	VolumeSource: corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
I have a naive question here about DownwardAPIVolumeSource; I tried to find it in the kube docs but failed: when this volume is mounted, is the content of the volume kept up to date, or is it snapshotted at mount time only? (And if it is kept up to date, how is that done? Possibly relevant to our Pull Request resource plans... 😇)
It's kept up to date, and afaik the kubelet handles keeping it up to date for each pod on the host by just watching the pod spec and updating appropriately.
ah interesting, good to know!
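For anyone unfamiliar with the mechanism being discussed: below is a minimal sketch (not code from this PR) of a downward API volume built with the same client types as the diff above, showing why the mounted content stays current. The volume name, mount path, and annotation key are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Project a pod annotation into a file inside the container. The kubelet
	// rewrites this file whenever the annotation changes, so the mounted
	// content is kept up to date rather than snapshotted at mount time.
	vol := corev1.Volume{
		Name: "downward", // illustrative name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "ready", // file that appears under the mount path
					FieldRef: &corev1.ObjectFieldSelector{
						FieldPath: "metadata.annotations['tekton.dev/ready']", // illustrative annotation key
					},
				}},
			},
		},
	}
	mount := corev1.VolumeMount{Name: vol.Name, MountPath: "/tekton/downward", ReadOnly: true}
	fmt.Println(vol.Name, mount.MountPath)
}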
}

sidecars := len(pod.Status.ContainerStatuses) - len(taskRun.Status.Steps)
if pod.Status.Phase == corev1.PodRunning && sidecarsReady == sidecars {
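The excerpt doesn't show how sidecarsReady is computed; presumably it counts container statuses that report Ready among the non-step containers. A rough sketch of that counting, using a helper name and a simplification that are assumptions rather than the PR's code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// readyContainers counts container statuses that report Ready. How the PR
// distinguishes step containers from sidecars isn't visible in this excerpt,
// so treat this only as a guess at the shape of the sidecarsReady value above.
func readyContainers(statuses []corev1.ContainerStatus) int {
	n := 0
	for _, s := range statuses {
		if s.Ready {
			n++
		}
	}
	return n
}

func main() {
	statuses := []corev1.ContainerStatus{
		{Name: "step-git-init", Ready: true},
		{Name: "sidecar-proxy", Ready: true},
	}
	fmt.Println(readyContainers(statuses))
}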
do you think we should worry at all about a state where a sidecar container's status indicates it is "ready", but the underlying binary isn't actually ready? (e.g. the binary is running but not actually ready to handle requests)
This approach effectively defines an API/contract for users to signal to tekton that their sidecars are ready: readiness is defined by the sidecar's readiness probe. I think that if a use-case can't be expressed as a readiness probe, which can run arbitrary commands/scripts, then it's probably too complex to have a dedicated way to specify it inside of tekton.
Within their own steps, users are free to implement any additional logic they need for more complex ready checks; this really only impacts sidecars that affect automatically specified steps (e.g. git-init).
I think we should wait until there's a clear use-case that readiness probes don't satisfy before coming up with additional ways for users to specify readiness.
Makes sense - sounds great :D
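To make that contract concrete, here is a minimal sketch of a sidecar container signalling readiness through an ordinary Kubernetes readiness probe; once the kubelet marks the container Ready, the sidecarsReady == sidecars check above is satisfied. The image, path, and port are placeholders, not anything defined by this PR.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// The sidecar decides what "ready" means via its readiness probe; an HTTP
	// check is shown here, but exec or TCP probes work the same way.
	probe := &corev1.Probe{InitialDelaySeconds: 1, PeriodSeconds: 2}
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	sidecar := corev1.Container{
		Name:           "proxy",                    // placeholder
		Image:          "example.com/proxy:latest", // placeholder
		ReadinessProbe: probe,
	}
	fmt.Println(sidecar.Name)
}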
After our back and forth on #727 I think this approach is pretty slick, if you want to continue to flesh it out @dicarlo2! For next steps I think this PR in particular would need docs, tests, etc. (which I think you already know; totally understand that you wanted feedback first). Would be great to discuss at our next working group meeting on Tuesday too if you are able to join. Anyway this seems great, let's do it! 😎
I've put it on my schedule to attend the next one (this coming Tuesday).
I think I let you down here, I didn't add this to the agenda 🤦♀ (btw feel free to edit the agenda directly yourself also!) but I have NOW added it for the next meeting, which is Tuesday May 14 (but I won't be there unfortunately 😩) - the following week will be cancelled for Kubecon (any chance you'll be at Kubecon?). But long story short @dicarlo2 I think we should go ahead with this! And probably @sbwsg will need/want this for the work he's doing on log streaming :D @dicarlo2 would you be open to having one of us continue the PR, e.g. adding some more tests, docs, the stuff we need to get it ready to merge? Of course if you want to take that on, that's totally cool too :D (and apologies this back and forth has taken so long!)
@bobcatfish feel free to take over; I've been jumping back and forth between a few projects, so I haven't had as much time as I would have liked to focus on this.
@dicarlo2: The following tests failed, say /retest to rerun them all:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
No, you didn't let me down, as usual my schedule is a mess so I missed the meeting :/ |
Changes
Please see #727 for the proposal for this POC.
Submitter Checklist
These are the criteria that every PR should meet; please check them off as you review them:
See the contribution guide for more details.
Release Notes